Today I want to talk about StorageClass, the advanced version of Volumes in Kubernetes (this is the reason I rushed through PersistentVolumeClaim and PersistentVolume earlier — I've been meaning to get here, even if it took a while).
We've already covered Volume, PersistentVolumeClaim, and PersistentVolume. A Volume simply and bluntly stores your data on a node, and figuring out which node is your own problem. The PersistentVolumeClaim / PersistentVolume pairing avoids that issue, but every PersistentVolumeClaim you create needs a matching PersistentVolume to go with it, which can feel like a bit of a hassle.
This is where StorageClass comes in: when a PersistentVolumeClaim requests storage, the StorageClass dynamically provisions a PersistentVolume for it, which makes things that much more convenient (it has other advantages too, but this is the one I appreciate most so far).
Now we need to boot up the fourth VM that we created a long time ago but never used again, and install an NFS server on it with the commands below.
$ sudo apt update
$ sudo apt install -y nfs-kernel-server
$ sudo mkdir -m 777 /export
$ sudo sed -i '$a /export *(rw,fsid=0,sync)' /etc/exports
$ sudo exportfs -ar
$ sudo cat /proc/fs/nfsd/versions
$ sudo exportfs -v
$ sudo systemctl status nfs-server --no-pager
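A side note on the `sed -i '$a …'` line above: `$` addresses the last line of the file and `a` appends text after it, which is how the export entry lands at the end of /etc/exports. Here is a minimal sketch of the same idiom on a scratch file (/tmp/exports.test is a hypothetical stand-in for /etc/exports, so nothing real gets touched):

```shell
# Scratch file standing in for /etc/exports (hypothetical path)
printf '# /etc/exports: the access control list for NFS\n' > /tmp/exports.test

# '$' addresses the last line; 'a' appends the given text after it
sed -i '$a /export *(rw,fsid=0,sync)' /tmp/exports.test

cat /tmp/exports.test
```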
Or, if you'd rather be lazy like me, you can use my k8s_nfs_server.sh script to do the installation.
Then comes a crucial step: install the NFS client on the Worker Nodes first (I once forgot to do this, and the result was a whole day spent hunting down the problem).
$ sudo apt install nfs-common
Next, we deploy the provisioner that will handle dynamic provisioning for the StorageClass, together with the ServiceAccount and RBAC rules it needs.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner-sa
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner-sa
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner-deploy
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner-pod
  template:
    metadata:
      labels:
        app: nfs-client-provisioner-pod
    spec:
      serviceAccountName: nfs-client-provisioner-sa
      containers:
        - name: nfs-client-provisioner-container
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: my-nfs-provisioner
            - name: NFS_SERVER
              value: 10.0.2.40
            - name: NFS_PATH
              value: /export
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.0.2.40
            path: /export
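One caveat: the external-storage project that publishes the image above has been archived, and on newer clusters (Kubernetes 1.20+ removed the deprecated selfLink field the old image relies on) the provisioner can get stuck. If that happens, its successor project, nfs-subdir-external-provisioner, is essentially a drop-in replacement; a sketch of the one-line image swap (tag v4.0.2 is my assumption, check the project's releases for the current one):

```yaml
          # Successor to quay.io/external_storage/nfs-client-provisioner
          image: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
```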
Finally, we can create the StorageClass, the Pod, and the PersistentVolumeClaim. You'll find that even though no matching PersistentVolume exists beforehand, the StorageClass dynamically provisions one for you and binds it to the PersistentVolumeClaim.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-sc
provisioner: my-nfs-provisioner
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: ubuntu
      image: ubuntu:20.04
      args:
        [
          bash,
          -c,
          'for ((i = 0; ; i++)); do echo "$i: $(date)"; sleep 100; done',
        ]
      volumeMounts:
        - mountPath: /volume
          name: test-volume
  volumes:
    - name: test-volume
      persistentVolumeClaim:
        claimName: nfs-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs-sc
  resources:
    requests:
      storage: 1Gi
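One more convenience worth knowing: if you'd rather not write `storageClassName` on every PersistentVolumeClaim, you can mark a StorageClass as the cluster default with the standard `storageclass.kubernetes.io/is-default-class` annotation; PVCs that omit `storageClassName` will then bind to it. A sketch of what our StorageClass would look like with the annotation added:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-sc
  annotations:
    # Makes this class the default for PVCs that omit storageClassName
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: my-nfs-provisioner
```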
That's it for now. Starting tomorrow we'll leave the world of Kubernetes for a bit and talk about some other things.
Bye~ bye~